PolarMix Supplemental Material

Neural Information Processing Systems

In this section, we conduct experiments to analyze how PolarMix benefits LiDAR point cloud learning. We first implement global augmentation approaches, including random rotation and random scaling, on two LiDAR scans separately and then concatenate them for training. As shown in rows '1, 2, 3' of the table, using more copies yields better segmentation performance, which indicates the effectiveness of this approach in enriching the data distribution. In comparison, PolarMix is more robust to the spatial location of instances, with little performance drop, and it clearly improves the robustness of the baseline with respect to the angular variations of instances.
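For concreteness, the sketch below illustrates this concatenation baseline: two scans are augmented independently with a random z-axis rotation and a random global scaling, then stacked into a single training sample. It assumes each scan is an (N, 4) NumPy array of (x, y, z, intensity) with per-point integer labels; the function names and augmentation ranges are illustrative assumptions, not the paper's exact settings.

    import numpy as np

    def global_augment(points: np.ndarray) -> np.ndarray:
        """Apply a random z-axis rotation and a random global scaling to one scan."""
        theta = np.random.uniform(0.0, 2.0 * np.pi)   # assumed rotation range
        scale = np.random.uniform(0.95, 1.05)         # assumed scaling range
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
        out = points.copy()
        out[:, :3] = (out[:, :3] @ rot.T) * scale     # rotate and scale xyz only
        return out

    def concat_two_scans(scan_a, labels_a, scan_b, labels_b):
        """Augment two scans separately, then concatenate them for training."""
        a = global_augment(scan_a)
        b = global_augment(scan_b)
        points = np.concatenate([a, b], axis=0)
        labels = np.concatenate([labels_a, labels_b], axis=0)
        return points, labels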



Neural Information Processing Systems

This is a challenging inference task given the need to reason beyond the local appearance of hands. The lack of training annotations indicating which object, or which parts of an object, the hand is in contact with further complicates the task.



Neuronal Gaussian Process Regression

Neural Information Processing Systems

The brain takes the uncertainty intrinsic to our world into account. For example, associating spatial locations with rewards requires predicting not only the expected reward at new spatial locations but also its uncertainty, in order to avoid catastrophic events and forage safely. A powerful and flexible framework for nonlinear regression that accounts for uncertainty in a principled Bayesian manner is Gaussian process (GP) regression. Here I propose that the brain implements GP regression and present neural networks (NNs) for it. First-layer neurons, e.g., hippocampal place cells, have tuning curves that correspond to evaluations of the GP kernel. Output neurons explicitly and distinctively encode the predictive mean and variance, as observed in orbitofrontal cortex (OFC) for the case of reward prediction. Because the weights of a NN implementing exact GP regression do not arise from biological plasticity rules, I present approximations that yield local (anti-)Hebbian synaptic learning rules. The resulting neuronal network approximates the full GP well compared to popular sparse GP approximations and achieves comparable predictive performance.
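The network view can be made concrete with a small sketch: first-layer activities are the kernel evaluations k(x*, x_i) (the "place cell" tuning curves), and two output units read out the predictive mean and variance. Note that the readout weights below are the exact GP weights, not the local (anti-)Hebbian approximations the abstract refers to; the RBF kernel, the synthetic reward data, and all hyperparameters are illustrative assumptions.

    import numpy as np

    def rbf_kernel(a, b, length=0.5):
        """Squared-exponential kernel; rows of a and b are input points."""
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / length**2)

    # Training data: rewards observed at 1-D spatial locations (assumed example).
    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=(30, 1))
    y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(30)
    noise = 0.1**2

    K = rbf_kernel(X, X) + noise * np.eye(len(X))   # Gram matrix + noise
    K_inv = np.linalg.inv(K)
    w_mean = K_inv @ y                              # fixed readout weights for the mean unit

    def predict(x_star):
        """First layer: tuning-curve activities; outputs: predictive mean and variance."""
        phi = rbf_kernel(x_star[None, :], X)[0]     # activities k(x*, x_i)
        mean = phi @ w_mean                          # linear readout
        var = rbf_kernel(x_star[None, :], x_star[None, :])[0, 0] - phi @ K_inv @ phi
        return mean, var

    m, v = predict(np.array([0.3]))
    print(f"predictive mean={m:.3f}, variance={v:.3f}")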